Conference Proceedings

2020 Korea Computer Congress (Korean Institute of Information Scientists and Engineers)


Title: RRF: A Novel Technique for Forcing 3D CNN to Rethink about Temporal Features Using RGB and Residual Frames
Author(s): Md Imtiaz Hossain, Luan N.T. Huynh, Md Alamgir Hossain, Md Delowar Hossain, Junyoung Park, Seung-Jik Kim, Eui-Nam Huh
Citation: Vol. 47, No. 01, pp. 0461 ~ 0463 (July 2020)
Abstract
Most recent methods proposed for action recognition use a two-stream network. The two streams predict individually from extracted RGB and optical-flow features, and the final prediction is obtained by averaging the two predictions. In this work, we propose a novel online concatenation technique for the RGB and residual-frame (RF) feature spaces using a modified double-necked 3D-ResNeXt-101 network. We show that RF features have an impact on the CNN similar to that of flow frames while avoiding flow computation entirely, thus reducing complexity. We also investigate whether a 3D CNN can extract temporal features from residual frames. Our RRF (RGB and RF) stream achieves 2.7% and 0.3% higher accuracy than the RGB and flow streams, respectively, using a basic state-of-the-art network (ResNeXt-101) on the HMDB51 dataset. We use a model pre-trained on the Kinetics-400 dataset.
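The abstract does not spell out how residual frames (RF) are computed; a common definition, which we assume here, is the framewise difference between consecutive RGB frames of a clip. A minimal NumPy sketch under that assumption (array shapes and the toy clip are illustrative, not from the paper):

```python
import numpy as np

def residual_frames(video):
    """Compute residual frames (RF) as differences of consecutive RGB frames.

    video: array of shape (T, H, W, C) holding T RGB frames.
    Returns an array of shape (T - 1, H, W, C), where frame i is
    video[i + 1] - video[i].
    """
    video = video.astype(np.float32)
    return video[1:] - video[:-1]

# Hypothetical toy clip: 8 frames of 112x112 RGB noise.
clip = np.random.rand(8, 112, 112, 3)
rf = residual_frames(clip)
print(rf.shape)  # (7, 112, 112, 3)
```

Residual frames computed this way carry motion information similar to optical flow but cost only a subtraction per pixel, which matches the abstract's claim of reduced complexity relative to a flow stream.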